Allow configuring the reflection prompt (#636)
Conversation
Walkthrough

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agent
    User->>Agent: Initialize Agent(reflect_prompt=custom?)
    User->>Agent: chat()
    Agent->>Agent: Generate response
    Agent->>Agent: Self-reflection loop
    Agent->>Agent: Use reflect_prompt (custom or default)
    Agent-->>User: Return response
```
Summary of Changes
Hello @LeoVS09, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces the ability to customize the prompt used by the agent for its self-reflection step. This enhancement addresses the issue where the default reflection prompt might consistently yield 'satisfactory' results, enabling users to guide the reflection process with more specific requirements tailored to their use case.
Highlights
- Agent Configuration: Added an optional `reflect_prompt` parameter to the `Agent` constructor, allowing users to provide a custom prompt for the self-reflection process.
- Reflection Logic: Modified the internal reflection prompt generation within the `chat` method to use the user-provided `reflect_prompt` if available, falling back to the default reflection instructions otherwise.
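Conceptually, the change can be sketched like this (the default instruction text, method name, and simplified constructor below are placeholders for illustration, not the project's actual code):

```python
from typing import Optional


class Agent:
    """Simplified sketch; the real Agent takes many more parameters."""

    def __init__(self, reflect_llm: Optional[str] = None,
                 reflect_prompt: Optional[str] = None):
        self.reflect_llm = reflect_llm
        # None means "use the built-in default reflection instructions"
        self.reflect_prompt = reflect_prompt

    def reflection_instructions(self) -> str:
        # Placeholder default; the actual wording in agent.py differs
        default = "Reflect on the response and state whether it is satisfactory."
        return self.reflect_prompt or default


agent = Agent(reflect_prompt="Verify the answer cites at least two sources.")
print(agent.reflection_instructions())
```

With no `reflect_prompt`, the same call returns the default instructions, which is what preserves backward compatibility.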
Code Review
This pull request introduces a configurable reflect_prompt parameter to the Agent class, allowing users to customize the reflection instructions for the self-reflection feature. The changes correctly add the new parameter to the __init__ method, store it as an instance variable, and utilize it in the synchronous chat method's reflection logic, falling back to a default prompt if the custom one isn't provided. This successfully addresses the user's goal of providing clearer requirements for reflection.
The implementation within the provided diffs is clear and follows Python best practices (PEP 8).
However, there are a few areas that need attention:
- Inconsistent Feature Implementation: The new `reflect_prompt` is used in the `chat` method, but the asynchronous `achat` method (specifically around lines 1403-1408 in the full file context) still uses the hardcoded reflection prompt. This means the custom reflection prompt feature will not work for asynchronous chat operations, leading to inconsistent behavior. It's important to update the `achat` method to also use `self.reflect_prompt` to ensure feature parity between synchronous and asynchronous operations. The existing `TODO` comment in `achat` about syncing with `chat` highlights this area.
- Missing Documentation: The docstring for the `Agent.__init__` method should be updated to include a description of the new `reflect_prompt` parameter, explaining its purpose and usage. This is important for maintainability and for users to understand how to use this new feature.
- Lack of Unit Tests: To ensure the reliability and correctness of this new feature, unit tests should be added. These tests should cover scenarios such as:
  - Using the default prompt when `reflect_prompt` is `None`.
  - Using the custom prompt when `reflect_prompt` is provided.
  - Behavior when `reflect_prompt` is an empty string (should use the default).
  - Verification of the feature in both `chat` and `achat` methods (once `achat` is updated).
Addressing these points will significantly improve the completeness and robustness of this enhancement.
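The suggested tests could be sketched along these lines, against a minimal stand-in for the fallback behavior (the default text, class body, and test names are illustrative, not taken from the repository):

```python
from typing import Optional


class Agent:
    # Minimal stand-in mirroring only the fallback behavior under test
    DEFAULT_REFLECT_PROMPT = "Reflect on the response and decide if it is satisfactory."

    def __init__(self, reflect_prompt: Optional[str] = None):
        self.reflect_prompt = reflect_prompt

    def reflection_prompt(self) -> str:
        # An empty string is falsy, so it also falls back to the default
        return self.reflect_prompt or self.DEFAULT_REFLECT_PROMPT


def test_default_when_none():
    assert Agent().reflection_prompt() == Agent.DEFAULT_REFLECT_PROMPT


def test_custom_when_provided():
    assert Agent(reflect_prompt="Be strict.").reflection_prompt() == "Be strict."


def test_empty_string_uses_default():
    assert Agent(reflect_prompt="").reflection_prompt() == Agent.DEFAULT_REFLECT_PROMPT


test_default_when_none()
test_custom_when_provided()
test_empty_string_uses_default()
print("all tests passed")
```

Note the empty-string case: using `or` for the fallback means `""` silently maps to the default, which is the behavior the review asks to be pinned down by a test.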
Actionable comments posted: 0
🧹 Nitpick comments (3)
src/praisonai-agents/praisonaiagents/agent/agent.py (3)
370-372: Introduce configurable reflection prompt
The addition of `reflect_prompt: Optional[str] = None` next to `reflect_llm` correctly exposes the self-reflection prompt to users. To complete this feature, please update the class docstring (or README) to document the new parameter and its default behavior.
473-475: Persist the custom prompt on the instance
Assigning `self.reflect_prompt = reflect_prompt` aligns with the constructor change. Optionally, you may want to trim or validate the prompt (e.g., ensure it's not just whitespace) before storing.
1233-1236: Use dynamic prompt in reflection loop
The f-string correctly injects `self.reflect_prompt` with a fallback to the original static text. To ensure this works as intended, consider adding unit tests that pass a custom `reflect_prompt` and verify it is used in the messages sent to the LLM. Would you like help drafting those tests?
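The fallback pattern this comment describes can be illustrated as follows (the default wording and surrounding message format are assumptions for illustration, not the actual code in `agent.py`):

```python
# Placeholder default; the actual static text in agent.py differs
default_reflection = (
    "Reflect on your previous response. "
    "Identify any issues and state whether it is satisfactory."
)


def build_reflection_message(reflect_prompt):
    # The f-string injects the custom prompt, falling back to the static default
    return f"{reflect_prompt or default_reflection}\nRespond in JSON."


print(build_reflection_message(None))
print(build_reflection_message("Check all claims against the provided context."))
```

A test would assert that the string passed to the LLM starts with the custom prompt when one is supplied, and with the default text otherwise.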
@claude review the above changes and make sure it's all correct; fix or optimise if required in this pull request, with minimal code changes.
Claude encountered an error (View job). I'll analyze this and get back to you.
@claude review the above changes and make sure it's all correct; fix or optimise if required in this pull request, with minimal code changes.
Claude encountered an error (View job). I'll analyze this and get back to you.
@claude review the above changes and make sure it's all correct; fix or optimise if required, with minimal code changes.
Claude encountered an error (View job). I'll analyze this and get back to you.
@claude review the above changes and make sure it's all correct; fix or optimise if required, with minimal code changes. Try it in a different branch.
Claude encountered an error (View job). I'll analyze this and get back to you.
@MervinPraison, it appears that Claude cannot add a comment to a PR created from a fork. I can try to push the branch to your repo and recreate the PR, but unfortunately I do not have access to do that.
@LeoVS09 Please upgrade and test it now.
Thanks for merging the changes, @MervinPraison. I was busy at work this week; I will let you know once I test it.
User description
I currently struggle to get any use from the self-reflection feature. During reflection, the LLM always reports that the response is satisfactory. I want to provide clear requirements for reflection through an additional parameter, `reflect_prompt`. This change was useful in my own project, so it will probably be useful for someone else.
PR Type
Enhancement
Description
• Add configurable `reflect_prompt` parameter to the `Agent` class
• Replace the hardcoded reflection prompt with a customizable option
• Enable users to provide specific reflection requirements
• Maintain backward compatibility with a default prompt fallback
Changes walkthrough 📝
agent.py — src/praisonai-agents/praisonaiagents/agent/agent.py
Add configurable reflection prompt parameter
• Added `reflect_prompt` parameter to the `Agent` `__init__` method
• Stored `reflect_prompt` as an instance variable
• Modified reflection logic to use the custom prompt when provided
• Added fallback to the default prompt for backward compatibility